THE LITTLE THOUGHTS OF THINKING MACHINES

When we interact with computers and other machines, we often use language ordinarily used for talking about people. We may say of a vending machine, "It wants another ten cents for a lousy candy bar." We may say of an automatic teller machine, "It thinks I don't have enough money in my account because it doesn't yet know about the deposit I made this morning." This article is about when we're right or almost right in saying these things, and when it's a good idea to think of machines that way.

For more than a century we have used machines in our daily lives whose detailed functioning most of us don't understand. Few of us know much about how the electric light system or the telephone system work internally. We do know their external behavior; we know that lights are turned on and off by switches and how to dial telephone numbers. We may not know much about the internal combustion engine, but we know that a car needs more gas when the gauge reads near EMPTY.

In the next century we'll be increasingly faced with much more complex computer-based systems. It won't be necessary for most people to know very much about how they work internally, but what we will have to know about them in order to use them is more complex than what we need to know about electric lights and telephones. As our daily lives involve ever more sophisticated computers, we will find that ascribing little thoughts to machines will be increasingly useful in understanding how to get the most good out of them.

Much that we'll need to know concerns the information stored in computers, which is why we find ourselves using psychological words like "knows", "thinks", and "wants" in referring to machines, even though machines are very different from humans and these words arose from the human need to talk about other humans.

According to some authorities, to use these words, the language of the mind, to talk about machines is to commit the error of anthropomorphism. Anthropomorphism is often an error, all right, but it is going to be increasingly difficult to understand machines without using mental terms.

Ever since Descartes, philosophically minded people have wrestled with the question of whether it is possible for machines to think. As we interact more and more with computers - both personal computers and others - the questions of whether machines can think and what kind of thoughts they can have become ever more pertinent. We can ask whether machines remember, believe, know, intend, like or dislike, want, understand, promise, owe, have rights or duties, or deserve rewards or punishment. Is this an all-or-nothing question, or can we say that some machines do some of these things and not others, or that they do them to some extent?

My answer is based on work in the field of artificial intelligence (usually abbreviated AI), which is the science and engineering of making computers solve problems and behave in ways generally considered to be intelligent.

AI research usually involves programming a computer to use specific concepts and to have specific mental qualities. Each step is difficult, and different programs have different mental qualities. Some programs acquire information from people or other programs and plan actions for people that involve what other people do. Such programs must ascribe beliefs, knowledge, and goals to other programs and people. Thinking about when they should do so led to the considerations of this article.

AI researchers now believe that much behavior can be understood using the principle of rationality:

    It will do what it thinks will achieve its goals.

What behavior is predicted then depends on what goals and beliefs are ascribed. The goals themselves need not be justified as rational.

Adopting this principle of rationality, we see that different machines have intellectual qualities to differing extents. Even some very simple machines can be usefully regarded as having some intellectual qualities. Machines have and will have varied little minds. Long before we can make machines with human capability, we will have many machines that cannot be understood except in mental terms. Machines can and will be given more and more intellectual qualities; not even human intelligence is a limit. However, artificial intelligence is a difficult branch of science and engineering, and, judging by present slow progress, it might take a long time. From the time of Mendel's experiments with peas to the cracking of the DNA code for proteins, a hundred years elapsed, and genetics isn't done yet.

Present machines have almost no emotional qualities, and, in my opinion, it would be a bad idea to give them any. We have enough trouble figuring out our duties to our fellow humans and to animals without creating a bunch of robots with qualities that would allow anyone to feel sorry for them or would allow them to feel sorry for themselves.

Since I advocate some anthropomorphism, I'd better explain what I consider good and bad anthropomorphism. Anthropomorphism is the ascription of human characteristics to things not human. When is it a good idea to do this? When it says something that cannot as conveniently be said some other way.

Don't get me wrong. The kind of anthropomorphism where someone says, "This terminal hates me!" and bashes it, is just as silly as ever. It is also common to ascribe personalities to cars, boats, and other machinery. It is hard to say whether anyone takes this seriously. Anyway, I'm not supporting any of these things.

The reason for ascribing mental qualities and mental processes to machines is the same as for ascribing them to other people. It helps us understand what they will do, how our actions will affect them, how to compare them with ourselves, and how to design them.

Researchers in artificial intelligence (AI) are interested in the use of mental terms to describe machines for two reasons. First, we want to provide machines with theories of knowledge and belief so they can reason about what their users know, don't know, and want. Second, what the user knows about the machine can often best be expressed using mental terms.

Suppose I'm using an automatic teller machine at my bank. I may make statements about it like, "It won't give me any cash because it knows there's no money in my account," or, "It knows who I am because I gave it my secret number". We need not ascribe to the teller machine the thought, "There's no money in his account," as its reason to refuse to give me cash. But it was designed to act as if it does, and if I want to figure out how to make it give me cash in the future, I should treat it as if it knows that sort of thing.

It's difficult to be rigorous about whether a machine really "knows", "thinks", etc., because we're hard put to define these things. We understand human mental processes only slightly better than a fish understands swimming.

Current AI approaches to ascribing specific mental qualities use the symbolism of mathematical logic. In that symbolism, speaking technically, a suitable collection of functions and predicates must be given. Certain formulas of this logic are then axioms giving relations between the concepts and conditions for ascribing them. These axioms are used by reasoning programs as part of the process whereby the program decides what to do. The formalisms require too much explanation to be included in this article, but some of the criteria are easily given in English.
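To give the flavor (this example is mine - a standard pair of epistemic-logic axioms, not formulas quoted from the references): one can introduce a predicate knows(p, phi), read "person or machine p knows the proposition phi", and adopt axioms such as

```latex
% Illustrative axioms for a "knows" predicate; notation invented here.
\begin{align*}
  & knows(p, \varphi) \supset \varphi
    && \text{(what is known is true)} \\
  & knows(p, \varphi \supset \psi) \wedge knows(p, \varphi)
    \supset knows(p, \psi)
    && \text{(known consequences of knowledge are known)}
\end{align*}
```

A reasoning program can combine axioms of this kind with particular facts - that the user knows his account number, say - as part of deciding what to do.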

Beliefs and goals are ascribed in accordance with the principle of rationality. Our object is to account for as much behavior as possible by saying the machine or person or animal does what it thinks will achieve its goals. It is especially important to have what is called in AI an epistemologically adequate system. Namely, the language must be able to express the information our program can actually get about a person's or machine's "state of mind" - not just what might be obtainable if the neurophysiology of the human or the design of the machine were more accessible.

In general we cannot give definitions, because the concepts form a system that we fit as a whole to phenomena. Similarly the physicist doesn't give a definition of electron or quark. Electron and quark are terms in a complicated theory that predicts the results of experiments.

Indeed common sense psychology works in the same way. A child learns to ascribe wants and beliefs to others in a complex way that he never learns to encapsulate in definitions.

Nevertheless we can give approximate criteria for some specific properties, relating them to the more implicit properties of believing and wanting.

Intends - We say that a machine intends to do something if we can regard it as believing that it will attempt to do it. We may know something that will deter it from making the attempt. Like most mental concepts, intention is an intermediate in the causal chain; an intention may be caused by a variety of stimuli and predispositions and may result in action or be frustrated in a variety of ways.

Tries - This is important in understanding machines that have a variety of ways of achieving a goal, including possibly ways that we don't know about. If the machine may do something we don't know about but that can later be explained in relation to a goal, we have no choice but to use "is trying" or some synonym to explain the behavior.

Likes - As in "A likes B". This involves wanting B's welfare. It requires that A be sophisticated enough to have a concept of B's welfare.

Self-consciousness - Self-consciousness is perhaps the most interesting mental quality to humans. Human self-consciousness involves at least the following:

1. Facts about the person's body as a physical object. This permits reasoning from facts about bodies in general to one's own. It also permits reasoning from facts about one's own body, e.g. its momentum, to corresponding facts about other physical objects.

2. The ability to observe one's own mental processes and to form beliefs and wants about them. A person can wish he were smarter or didn't want a cigarette.

3. Facts about oneself as having beliefs, wants, etc. among other similar beings.

Some of the above attributes of human self-consciousness are easy to program. For example, it is not hard to make a program look at itself, and many AI programs do look at parts of themselves. Others are more difficult. Also, animals cannot be shown to have more than a few. Therefore, many present and future programs can best be described as partially self-conscious.
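For instance, "looking at itself" can be as simple as a program reading off facts about its own parts. This Python fragment illustrates only that narrow point (it is not any of the AI programs alluded to):

```python
# A program looking at part of itself: Python function objects
# carry their own name, documentation, and argument list.

def greet(name):
    """Return a friendly greeting."""
    return "hello, " + name

self_knowledge = {
    "name": greet.__name__,
    "doc": greet.__doc__,
    "arguments": greet.__code__.co_varnames[:greet.__code__.co_argcount],
}
print(self_knowledge["name"])       # greet
print(self_knowledge["arguments"])  # ('name',)
```

Of course this is mere self-inspection, a sliver of point 2 above, not self-consciousness in the fuller sense.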

Suppose someone says, "The dog wants to go out". He has ascribed the mental quality of wanting to the dog without claiming that the dog thinks like a human and can form out of its parts the thought, "I want to go out".

The statement isn't shorthand for something the dog did, because there are many ways of knowing that a dog wants to go out. It also isn't shorthand for a specific prediction of what the dog is likely to do next. Nor do we know enough about the physiology of dogs for it to be an abbreviation for some statement about the dog's nervous system. It is useful because of its connection with all of these things and because what it says about the dog corresponds in an informative way with similar statements about people. It doesn't commit the person who said it to an elaborate view of the mind of a dog. For example, it doesn't commit a person to any position about whether the dog has the mental machinery to know that it is a dog or even to know that it wants to go out. We can make similar statements about machines.

Here is an extract from the instructions that came with an electric blanket: "Place the control near the bed in a place that is neither hotter nor colder than the room itself. If the control is placed on a radiator or radiant heated floors, it will 'think' the entire room is hot and will lower your blanket temperature, making your bed too cold. If the control is placed on the window sill in a cold draft, it will 'think' the entire room is cold and will heat up your bed so it will be too hot."

I suppose some philosophers, psychologists, and English teachers would maintain that the blanket manufacturer is guilty of anthropomorphism, and some will claim that great harm can come from thus ascribing to machines qualities which only humans can have. I argue that saying that the blanket thermostat "thinks" is OK; they could even have left off the quotes. Moreover, this helps us understand how the thermostat works. The example is extreme, because most people don't need the word "think" to understand how a thermostatic control works. Nevertheless, the blanket manufacturer was probably right in thinking that it would help some users.

Keep in mind that the thermostat can only be properly considered to have just three possible thoughts or beliefs. It may believe that the room is too hot, or that it is too cold, or that it is OK. It has no other beliefs; for example, it does not believe that it is a thermostat.
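A thermostat with exactly these three beliefs can be written out in full. The sketch below is hypothetical (the setpoint, tolerance, and action names are invented for illustration):

```python
# A thermostat as a minimal believer: three possible beliefs, and
# behavior that follows the principle of rationality given them.

def belief(room_temp, setpoint=20.0, tolerance=1.0):
    """The thermostat's single current belief about the room."""
    if room_temp > setpoint + tolerance:
        return "room too hot"
    if room_temp < setpoint - tolerance:
        return "room too cold"
    return "room ok"

def act(current_belief):
    """What it does, given what it believes."""
    return {"room too hot": "heat off",
            "room too cold": "heat on",
            "room ok": "no change"}[current_belief]

print(belief(17.0), "->", act(belief(17.0)))  # room too cold -> heat on
```

Nothing in the program can represent "I am a thermostat"; its whole belief space is the three strings.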

The example of the thermostat is a very simple one. If we had only thermostats to think about, we wouldn't bother with the concept of belief at all. And if all we wanted to think about were zero and one, we wouldn't bother with the concept of number.

Here's a somewhat fanciful example of a machine with more substantial mental qualities that might someday be encountered in daily life.

In ten or twenty years Minneapolis-Honeywell, which makes many thermostats today, may try to sell you a really fancy home temperature control system. It will know the temperature and humidity preferences of each member of the family and can detect who is in the room. When several are in the room it makes what it considers a compromise adjustment, taking account of who has most recently had to suffer having the room climate different from what he prefers. Perhaps Honeywell discovers that these compromises should be modified according to a social rank formula devised by its psychologists and determined by patterns of speech loudness. The brochure describing how the thing works is rather lengthy, and the real dope is in a rather technical appendix in small print.
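The compromise rule in this fantasy can itself be sketched as a little program. Everything below (the weighting scheme, the names, the numbers) is invented to fit the story, not a description of any real product:

```python
# Fanciful climate-controller compromise: average the preferences of
# the people present, weighted toward whoever has most recently had
# to suffer a room climate far from what he prefers. Data invented.

def compromise_setpoint(preferences, recent_discomfort, present):
    """Discomfort-weighted average of the preferred temperatures."""
    weights = {p: 1.0 + recent_discomfort[p] for p in present}
    total = sum(weights.values())
    return sum(preferences[p] * weights[p] for p in present) / total

prefs = {"Grandpa": 24.0, "me": 19.0}   # preferred temperatures, deg C
suffered = {"Grandpa": 0.0, "me": 2.0}  # I've been uncomfortable lately
print(compromise_setpoint(prefs, suffered, ["Grandpa", "me"]))  # 20.25
```

The point of the exercise is that a user who knows none of this arithmetic can still predict the system usefully in mental terms: "It's favoring whoever has suffered most lately."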

Now imagine that I went on about this thermostat until you were bored and you skipped the rest of the paragraph. Confronted with an uncomfortable room, you form any of the following hypotheses, depending on what other information you have.

1. It's trying to do the right thing, but it can't because the valve is stuck. But then it should complain.

2. It regards Grandpa as more important than me, and it is keeping the room hot in case he comes in.

3. It confuses me with Grandpa.

4. It has forgotten what climate I like.

A child unable to read the appendix to the user's manual will be able to understand a description of the "climate controller" in mental terms. The child will be able to request changes like "Tell it I like it hotter" or "Tell it Grandpa's not here now". Indeed the designer of the system will have used the mental terms in formulating the design specifications.

The automatic teller is another example. It has beliefs like, "There's enough money in the account," and "I don't give out that much money". A more elaborate automatic teller that handles loans, loan payments, traveler's checks, and so forth may have beliefs like, "The payment wasn't made on time," or, "This person is a good credit risk."

The next example is adapted from the University of California philosopher John Searle. A person who doesn't know Chinese memorizes a book of rules for manipulating Chinese characters. The rules tell him how to extract certain parts of a sequence of characters, how to rearrange them, and how finally to send back another sequence of characters. These rules say nothing about the meaning of the characters, just how to compute with them. He is repeatedly given Chinese sentences, to which he applies the rules, and gives back what turn out, because of the clever rules, to be Chinese sentences that are appropriate replies. We suppose that the rules result in a Chinese conversation so intelligent that the person giving and receiving the sentences can't tell him from an intelligent Chinese speaker. This is analogous to a computer, which only obeys its programming language, but can be programmed so that one can communicate with it in a different programming language, or in English. Searle says that since the person in the example doesn't understand Chinese - even though he can produce intelligent Chinese conversation by following rules - a computer cannot be said to "understand" things. He makes no distinction, however, between the hardware (the person) and the process (the set of rules). I would argue that the set of rules understands Chinese and, analogously, a computer program may be said to understand things, even if the computer does not. Both Searle and I are ignoring practical difficulties like how long it would take a person with a rule book to come up with a reply.
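The rule-book setup can be caricatured in a few lines. This toy table (two invented exchanges; a conversation-quality rule book would be astronomically larger) shows the essential point: the procedure consults patterns, never meanings.

```python
# Toy "Chinese room": replies are produced by blind lookup on the
# incoming character sequence. The rules here are invented and trivial.

RULES = {
    "你好": "你好",        # a greeting is answered with a greeting
    "你会说中文吗": "会",   # "can you speak Chinese?" -> "yes"
}

def rule_follower(sentence):
    """Apply the memorized rules; the fallback asks to repeat."""
    return RULES.get(sentence, "请再说一遍")

print(rule_follower("你好"))  # 你好
```

On the view argued above, whatever understanding there is belongs to the process - RULES and rule_follower taken together - not to whoever or whatever executes them.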

Daniel Dennett, a Tufts University philosopher, has proposed three stances aimed at understanding a system with which one interacts.

The first he calls the physical stance. In this we look at the system in terms of its physical structure at various levels of organization. Its parts have their properties and they interact in ways that we know about. In principle the analysis can go down to the atoms and their parts. Looking at a thermostat from this point of view, we'd want to understand the working of the bimetal strip that most thermostats use. For the automatic teller, we'd want to know about integrated circuitry, for one thing. (Let's hope no one's in line behind us while we do this.)

The second is called the design stance. In this we analyze something in terms of the purpose for which it is designed. Dennett's example of this is the alarm clock. We can usually figure out what an alarm clock will do, e.g. when it will go off, without knowing whether it is made of springs and gears or of integrated circuits. The user of an alarm clock typically doesn't know or care much about its internal structure, and this information wouldn't be of much use. Notice that when an alarm clock breaks, its repair requires taking the physical stance. The design stance can usefully be applied to a thermostat - it shouldn't be too hard to figure out how to set it, no matter how it works. With the automatic teller, things are a little less clear.

The design stance is appropriate not only for machinery but also for the parts of an organism. It is amusing that we can't attribute a purpose for the existence of ants, but we can find a purpose for the glands in an ant that emit a chemical substance for other ants to follow.

The third is called the intentional stance, and this is what we'll often need for understanding computer programs. In this we try to understand the behavior of a system by ascribing to it beliefs, goals, intentions, likes and dislikes, and other mental qualities. In this stance we ask ourselves what the thermostat thinks is going on, or what the automatic teller wants from us before it'll give us cash. We say things like, "The store's billing computer wants me to pay up, so it intends to frighten me by sending me threatening letters". The intentional stance is most useful when it is the only way of expressing what we know about a system.

(For variety Dennett mentions the astrological stance. In this the way to think about the future of a human is to pay attention to the configuration of the stars when he was born. To determine whether an enterprise will succeed, we determine whether the signs are favorable. This stance is clearly distinct from the others - and worthless.)

It is easiest to understand the ascription of thoughts to machines in circumstances where we also understand the machine in physical terms. However, the payoff comes when either no one or only an expert understands the machine physically.

However, we must be careful not to ascribe properties to a machine that the particular machine doesn't have. We humans can easily fool ourselves when there is something we want to believe.

The mental qualities of present machines are not the same as ours. While we will probably be able, in the future, to make machines with mental qualities more like our own, we'll probably never want to deal with machines that are too much like us. Who wants to deal with a computer that loses its temper, or an automatic teller that falls in love? Computers will end up with the psychology that is convenient to their designers (and they'll be fascist bastards if those designers don't think twice). Program designers have a tendency to think of the users as idiots who need to be controlled. They should rather think of their program as a servant, whose master, the user, should be able to control it. If designers and programmers think about the apparent mental qualities that their programs will have, they'll create programs that are easier and pleasanter - more humane - to deal with.

References:

McCarthy, John (1979): "Ascribing Mental Qualities to Machines", in Ringle, Martin (ed.), Philosophical Perspectives in Artificial Intelligence, Harvester Press, July 1979. This is the technical paper on which this article is based.

McCarthy, John (1979): "First Order Theories of Individual Concepts and Propositions", in Michie, Donald (ed.), Machine Intelligence 9, University of Edinburgh Press, Edinburgh. This paper expresses some facts about knowledge in first order logic.

Kowalski, Robert (19xx): Logic for Problem Solving, xxx. This book describes the use of logical formalism in artificial intelligence.

Newell, Allen (1980): The Logic Level, xxx. This article clearly expounds a different approach to ascribing mental qualities.

Dennett, Daniel (198x): Herbert Spencer lectures, accurate reference to follow. This non-technical article describes the physical, design and intentional stances in philosophical language.

Searle, John (198x): xxx. This article takes the point of view that mental qualities should not be ascribed to machines.